54 research outputs found

    An empirical framework for user mobility models: Refining and modeling user registration patterns

    Get PDF
    Abstract: In this paper, we examine user registration patterns in empirical WLAN traces, identify elusive patterns that are mistakenly treated as user movements when constructing empirical mobility models, and analyze them to build a realistic user mobility model. The examination shows that about 38–90% of the transitions are irrelevant to actual user movements. To refine out these elusive movements, we investigate the geographical relationships among APs and propose a filtering framework for removing them from the trace data. We then analyze the impact of these false-positive movements on an empirical mobility model. The numerical results indicate that the proposed framework improves the fidelity of the empirical mobility model. Finally, based on the analysis of the elusive registration patterns, we devise an analytical model for characterizing realistic user movements, which emulates elusive user registration patterns and generates true user mobility patterns.
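    The filtering idea can be sketched in a few lines: drop AP transitions that are unlikely to correspond to real movements, based on AP geography and dwell time. The AP coordinates, threshold values, and function names below are illustrative assumptions and do not come from the paper.

```python
import math

# Hypothetical AP coordinates (meters); in practice these would come from a site map.
AP_COORDS = {"ap1": (0.0, 0.0), "ap2": (5.0, 0.0), "ap3": (120.0, 80.0)}

def distance(a, b):
    (x1, y1), (x2, y2) = AP_COORDS[a], AP_COORDS[b]
    return math.hypot(x2 - x1, y2 - y1)

def filter_transitions(trace, min_dist=30.0, min_dwell=60.0):
    """Drop AP transitions that are unlikely to be real movements.

    trace: time-ordered list of (timestamp_seconds, ap_name) registration records.
    A transition is kept only if the two APs are far enough apart and the user
    stayed at the previous AP long enough (both thresholds are assumptions).
    """
    kept = []
    prev_time, prev_ap = None, None
    for t, ap in trace:
        if prev_ap is None:
            kept.append((t, ap))
        elif ap != prev_ap and distance(prev_ap, ap) >= min_dist and t - prev_time >= min_dwell:
            kept.append((t, ap))
        prev_time, prev_ap = t, ap
    return kept

trace = [(0, "ap1"), (10, "ap2"), (15, "ap1"), (300, "ap3")]
print(filter_transitions(trace))   # the ap1 <-> ap2 ping-pong is filtered out
```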

    Enabling Theoretical Model Based Techniques for Simulating Large Scale Networks

    Get PDF
    Modern data communication networks are extremely complex and do not lend themselves well to theoretical analysis. It is not unusual that a rigorous analysis is possible only after leaving out subtle details that cannot easily be captured in the analysis. As a result, packet-mode, event-driven simulation is usually resorted to in order to better study the performance of network components, protocols, and their interactions. The major obstacle in packet-mode simulation, however, is the vast number of packets that have to be simulated in order to produce accurate results, especially in large-scale networks. A reasonable solution is to incorporate theoretical modeling into packet-mode simulation. The notion of fluid model based simulation has recently been proposed to alleviate the computational overhead of packet-mode simulation. Conceptually, a fluid model is developed and incorporated into the simulation engine; in the course of simulation, a sequence of closely spaced packets is abstracted into a fluid chunk, and the fluid model is used to determine its behavior. As the first theme, we investigate whether or not fluid model based simulation is effective in simulating IEEE 802.11-operated wireless LANs, and develop a fast simulation framework to expedite simulation without compromising the fidelity of the simulation results. In spite of its effectiveness in reducing execution time, fluid model based simulation is not well suited for studying network behavior under light and/or sporadic traffic, as it is built upon the assumption of a large number of active flows in the network. To address this issue, we contrive network calculus based simulation as another main theme of the thesis. We first characterize how TCP congestion control interacts with AQM strategies using network calculus theory, then determine a set of scheduling rules to regulate TCP traffic, and finally incorporate the rules into a network simulation engine to improve simulation performance. Although both fluid model based simulation and network calculus based simulation give encouraging results, they cannot provide packet-level dynamics, such as the instantaneous queue length and packet dropping probability, due to the use of abstract simulation units, i.e., fluid rate and traffic amount. To address this issue, we propose a hybrid simulation technique, called mixed mode simulation, which recovers the packet-level details of one packet-mode foreground flow while approximating all the other flows as theoretical-model-based background flows. In the mixed mode framework, packet-mode simulation co-exists with theoretical model based simulation within one simulation framework, and therefore analytical models specifying their interactions must be devised. Lastly, we also contrive a new rescaling simulation methodology (RSM) to simulate large-scale IP networks with TCP and/or UDP traffic. Even though mixed mode simulation can produce packet-level details, there are cases in which all of the network behavior, including all flows and all networking points, must be inspected. The underlying idea of RSM based simulation is to reduce the computational cost by scaling the network down to a tractable one that can be simulated at packet mode for a short time interval to produce sufficient results, and then extrapolating the results expected from the original network from those obtained. To provide a guideline for both down- and up-scaling, and to explain how the network properties unique to the original network are preserved within the down-scaled network, a rescaling model is contrived and presented.
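    The core fluid abstraction can be made concrete with a minimal sketch: runs of closely spaced packet arrivals are collapsed into fluid chunks (a rate over an interval), so the simulator handles one object instead of many packet events. The packet size and gap threshold below are illustrative assumptions; the real fluid model in the thesis also has to describe how such chunks traverse the 802.11 channel.

```python
from dataclasses import dataclass

@dataclass
class FluidChunk:
    start: float    # seconds
    end: float      # seconds
    rate: float     # bits per second over [start, end]

def packets_to_fluid(arrivals, pkt_bits=12000, gap_threshold=0.005):
    """Collapse runs of closely spaced packet arrival times into fluid chunks.

    arrivals: sorted packet arrival times (seconds). Consecutive packets closer
    than gap_threshold are merged into one chunk carrying their aggregate rate.
    """
    chunks, run = [], [arrivals[0]]
    for t in arrivals[1:]:
        if t - run[-1] <= gap_threshold:
            run.append(t)
        else:
            chunks.append(_close_run(run, pkt_bits))
            run = [t]
    chunks.append(_close_run(run, pkt_bits))
    return chunks

def _close_run(run, pkt_bits):
    duration = max(run[-1] - run[0], 1e-6)
    return FluidChunk(run[0], run[-1], len(run) * pkt_bits / duration)

arrivals = [0.000, 0.001, 0.002, 0.003, 0.100, 0.101]
for chunk in packets_to_fluid(arrivals):
    print(chunk)
```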

    Mixed Mode Simulation for IEEE 802.11-operated WLANs: Integration of Packet Mode and Fluid Model Based Simulation

    Get PDF
    In this paper, we address the issue of integrating packet level simulation with fluid model based simulation for IEEE 802.11-operated wireless LANs (WLANs), so as to combine the performance gains of the latter with the accuracy and packet level details afforded by the former. In mixed mode simulation, the foreground flow is simulated at the packet level, while the other, background flows are approximated into a collection of fluid chunks and simulated in the fluid mode. Note that these two types of flows influence each other at the point of interaction, e.g., the wireless channel in a WLAN. In order to realize mixed mode simulation, we develop a model of the interaction at the wireless channel between the foreground flow and the background flows, in terms of their achievable throughput. We then implement mixed mode simulation in ns-2, and conduct a comprehensive simulation study to evaluate it with respect to accuracy (in terms of error discrepancy) and efficiency (in terms of speed-up in conducting simulation). The simulation results indicate that for IEEE 802.11-operated WLANs, it is feasible to blend fluid model based simulation into packet level simulation: the performance improvement is quite significant, while the accuracy and the desired packet level details are not compromised. Specifically, mixed mode simulation incurs an error discrepancy of only approximately 2% and reduces the execution time by two orders of magnitude. This, coupled with the fact that mixed mode simulation is able to retain packet level details for the connection of interest, makes it an excellent candidate for carrying out large-scale simulation of IEEE 802.11-operated WLANs.
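    The point of interaction can be illustrated very roughly: the sketch below splits an assumed achievable channel throughput between one packet-mode foreground flow and the aggregated fluid background flows using a simple fair-share rule. The actual interaction model in the paper is derived from 802.11 behavior; the rule, names, and numbers here are illustrative assumptions only.

```python
def channel_shares(total_throughput_mbps, n_background, fg_demand_mbps, bg_demand_mbps):
    """Split the achievable channel throughput between one packet-mode foreground
    flow and n fluid-mode background flows.

    Assumes a fair per-station share (a simplification, not the paper's 802.11
    interaction model): the foreground flow gets min(its demand, fair share), and
    the background fluid aggregate absorbs the remainder up to its demand.
    """
    fair_share = total_throughput_mbps / (n_background + 1)
    fg = min(fg_demand_mbps, fair_share)
    bg = min(bg_demand_mbps, total_throughput_mbps - fg)
    return fg, bg

fg, bg = channel_shares(total_throughput_mbps=30.0, n_background=9,
                        fg_demand_mbps=5.0, bg_demand_mbps=40.0)
print(f"foreground: {fg:.1f} Mb/s, background fluid: {bg:.1f} Mb/s")
```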

    An Analysis of the Binary Exponential Backoff Algorithm in Distributed MAC Protocols

    Get PDF
    In this paper, we perform an in-depth analytical study of the binary exponential backoff algorithm (BEBA) that is widely used in distributed MAC protocols, for example IEEE 802.11 DCF. We begin with a generalized framework for modeling BEBA. We then identify a key difference between BEBA and the commonly assumed p-persistent model: due to the characteristics of BEBA, the slot succeeding a busy period has a different contention rate from the other slots. This causes access to a slot to be non-uniform and dependent on whether or not the slot immediately follows a busy period. We propose a detailed model, based on a Markov chain, to faithfully describe the channel activities governed by BEBA. To reduce the computational complexity, we simplify the model to an approximate one, and conduct an extensive simulation study. The analytical results derived from the proposed model are compared against those obtained from two other representative models. It is demonstrated that the proposed model is an accurate characterization of the BEBA algorithm over a broader range of system configurations. We further investigate the impact of the stochastic properties of the backoff time, r, on performance. It is revealed that in certain circumstances the backoff-time distribution becomes an important factor affecting performance: a case study shows that shifting the distribution range of r by merely one slot may result in substantial degradation of the system throughput.
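    For orientation, a toy slotted Monte Carlo of the basic BEB mechanics (contention window doubling on collision, reset on success) can look like the following. It deliberately ignores backoff freezing and the post-busy-period slot effect that the paper's Markov-chain model captures, so it is only a sketch; all parameter values are illustrative.

```python
import random

def simulate_beb(n_stations=10, slots=200_000, cw_min=16, cw_max=1024, seed=1):
    """Toy slotted simulation of binary exponential backoff (BEB).

    Each station draws a backoff uniformly from [0, cw-1]; on collision its cw
    doubles up to cw_max, on success it resets to cw_min. Returns the fraction
    of slots carrying a successful transmission. A simplified model, not the
    analysis from the paper.
    """
    rng = random.Random(seed)
    cw = [cw_min] * n_stations
    backoff = [rng.randrange(cw_min) for _ in range(n_stations)]
    successes = 0
    for _ in range(slots):
        ready = [i for i, b in enumerate(backoff) if b == 0]
        if len(ready) == 1:                      # successful transmission
            i = ready[0]
            successes += 1
            cw[i] = cw_min
            backoff[i] = rng.randrange(cw[i])
        elif len(ready) > 1:                     # collision: double cw, redraw
            for i in ready:
                cw[i] = min(2 * cw[i], cw_max)
                backoff[i] = rng.randrange(cw[i])
        backoff = [b - 1 if b > 0 else b for b in backoff]
    return successes / slots

print(f"success-slot ratio: {simulate_beb():.3f}")
```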

    Network Invariant-Based Fast Simulation for Large Scale TCP/IP Networks

    Get PDF
    In this paper, we present a rescaling simulation methodology (RSM) to expedite simulation of large-scale TCP/IP networks without loss of fidelity in the simulation results. Conceptually, we scale down the network to be simulated to reduce the number of events, simulate the downscaled network for a short period of time, and then extrapolate the corresponding results for the original network by scaling up the simulation results obtained from the downscaled network. Both scaling down and scaling up are conducted in such a manner that a network invariant, the bandwidth-delay product, is preserved. In particular, since the queue dynamics, such as the queue size, the dropping probability, and other parameters at every link, are the same in both the original and the downscaled network, RSM can accurately infer the network dynamics of the original network (equipped with various Active Queue Management (AQM) strategies). In contrast to SHRiNK, RSM does not make any assumption about the input traffic and can work with any AQM strategy; it also preserves the queue dynamics and the network capacity as perceived by TCP connections. To validate the proposed methodology, we have implemented RSM based simulation in ns-2 and conducted a simulation study comparing RSM based simulation against packet level simulation with respect to the capability of capturing transient, packet-level network dynamics, the execution time, and the discrepancy in simulation results. The simulation results indicate an improvement of an order of magnitude or more (up to 50 times) in execution time, and the performance improvement becomes more prominent as the network size increases (in terms of the number of nodes and the network capacity) or as the scaling parameter decreases. The error discrepancy between RSM based simulation and packet level simulation, on the other hand, ranges from 1-2% at minimum to 10% at maximum across a wide variety of network topologies (with various AQM strategies) and traffic loads. The encouraging simulation results, coupled with the fact that the implementation of RSM is simple and straightforward, suggest that RSM can be used to simulate, and accurately infer the network dynamics of, large-scale TCP/IP networks.
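    The invariant itself can be shown in a few lines. The scaling rule below (capacity scaled by alpha, propagation delay stretched by 1/alpha) is just one simple way to keep the bandwidth-delay product constant; it is an assumption for illustration, not necessarily the scaling rule RSM actually uses.

```python
import math

def scale_link(capacity_bps, delay_s, alpha):
    """Scale a link down by a factor alpha (0 < alpha <= 1) while preserving the
    bandwidth-delay product, the network invariant RSM is built around.

    Here capacity is reduced by alpha and propagation delay is stretched by
    1/alpha so that capacity * delay stays constant; an illustrative choice,
    not necessarily the paper's exact rule.
    """
    scaled_capacity = capacity_bps * alpha
    scaled_delay = delay_s / alpha
    assert math.isclose(scaled_capacity * scaled_delay, capacity_bps * delay_s)
    return scaled_capacity, scaled_delay

# A 1 Gb/s, 10 ms link scaled down by 1/10: the BDP stays at 10^7 bits.
print(scale_link(1e9, 0.010, 0.1))
```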

    Enabling Theoretical Model Based Techniques for Simulating Large Scale Networks

    No full text
    170 p. Thesis (Ph.D.)--University of Illinois at Urbana-Champaign, 2004. We develop a fluid model describing the data transmission activities in IEEE 802.11-operated WLANs, and use it to explore whether or not fluid model-based simulation is effective in simulating WLANs. Fluid model based simulation is not well suited for studying network behavior under light and/or sporadic traffic, as it assumes a large number of flows in the network. To address this issue, we introduce the notion of network calculus based simulation: we characterize the interaction between TCP and AQM, determine the scheduling rules necessary to regulate TCP traffic, and incorporate the rules into a simulation engine. Although both fluid model based and network calculus based simulation give encouraging results in terms of execution time, they cannot provide packet level details, such as the instantaneous queue length and packet dropping probability, due to the use of larger simulation units. In order to provide the packet level dynamics, we propose mixed mode simulation, in which packet mode simulation co-exists with theoretical model-based simulation within one simulation framework. We also propose a new rescaling simulation methodology (RSM) to simulate IP networks with TCP and/or UDP traffic in cases where the behavior of all flows must be inspected. The underlying idea of RSM based simulation is to reduce the computation by scaling down the network to one that can be simulated in a short time to produce sufficient results, and then to extrapolate the results for the original network from the results obtained from the smaller network. U of I Only: Restricted to the U of I community indefinitely during batch ingest of legacy ETD.
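    The network-calculus ingredient can be illustrated with the textbook bounds for a token-bucket constrained flow (arrival curve b + r·t) served by a rate-latency server (R, T): maximum delay is bounded by T + b/R and maximum backlog by b + r·T when r ≤ R. The scheduling rules the thesis derives to regulate TCP traffic are not reproduced here, and the numbers below are illustrative.

```python
def delay_backlog_bounds(burst_bits, rate_bps, service_rate_bps, latency_s):
    """Classic network-calculus bounds for a token-bucket constrained flow
    (arrival curve b + r*t) served by a rate-latency server (R, T).

    Standard results: max delay <= T + b/R (for r <= R), max backlog <= b + r*T.
    Shown only for orientation; not the thesis's TCP scheduling rules.
    """
    assert rate_bps <= service_rate_bps, "flow must be sustainable (r <= R)"
    max_delay = latency_s + burst_bits / service_rate_bps
    max_backlog = burst_bits + rate_bps * latency_s
    return max_delay, max_backlog

d, q = delay_backlog_bounds(burst_bits=15_000, rate_bps=2e6,
                            service_rate_bps=10e6, latency_s=0.002)
print(f"delay bound: {d*1e3:.2f} ms, backlog bound: {q/8/1500:.1f} packets (1500 B)")
```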

    Provisioning Quality Controlled Medium Access in UltraWideBand (UWB) WPANs

    Get PDF
    Quality of service (QoS) provisioning is one of the most important criteria in newly emerging UWB-operated WPANs, as they are expected to support a wide variety of applications, from time-constrained multimedia streaming to throughput-hungry content transfer. As such, the Enhanced Distributed Coordinated Access (EDCA) mechanism has been adopted by the MultiBand OFDM Alliance in its UWB MAC proposal. In this paper, we conduct a rigorous, comprehensive theoretical analysis and show that with the currently recommended parameter setting, EDCA cannot provide adequate QoS. In particular, without responding to the system dynamics (e.g., taking into account the number of active class-i stations), EDCA cannot allocate bandwidth in a deterministic proportional manner, and the system bandwidth is under-utilized. After identifying this deficiency of EDCA, we propose, in compliance with the EDCA-incorporated UWB MAC protocol proposed in [18] [22] [23], a framework, along with a set of theoretically grounded methods, for controlling medium access with deterministic QoS in UWB networks. We show that in this framework, 1) real-time traffic is guaranteed deterministic bandwidth via a contention-based reservation access method; 2) best-effort traffic is provided with deterministic proportional QoS; and 3) the bandwidth utilization is maximized. We have also validated and evaluated the QoS provisioning capability and practicality of the proposed MAC framework, both via simulation and empirically by leveraging the MADWifi (Multiband Atheros Driver for WiFi) Linux driver for wireless LAN devices with the Atheros chipset.
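    As a rough illustration of proportional allocation that adapts to the number of active class-i stations, the sketch below picks per-class contention windows under a p-persistent approximation (per-slot access probability about 2/(CW + 1)). This heuristic, and every name and value in it, is an assumption for illustration and not the method proposed in the paper.

```python
def contention_windows(class_weights, active_stations, base_cw=32):
    """Choose per-class contention windows so that each class's aggregate share
    of channel accesses stays roughly proportional to its weight, regardless of
    how many stations of that class are active.

    Under the p-persistent approximation, a station attempts with probability
    ~ 2/(CW + 1), so setting CW_i proportional to n_i / weight_i keeps the
    class aggregate attempt rate proportional to weight_i. Illustrative only.
    """
    cws = {}
    for cls, weight in class_weights.items():
        n_i = max(active_stations.get(cls, 0), 1)
        cws[cls] = max(int(base_cw * n_i / weight), 1)
    return cws

print(contention_windows({"video": 2.0, "best_effort": 1.0},
                         {"video": 4, "best_effort": 10}))
# {'video': 64, 'best_effort': 320} -> aggregate access ratio stays near 2:1
```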

    DAG-Based Distributed Ledger for Low-Latency Smart Grid Network

    No full text
    In this paper, we propose a scheme that implements a Distributed Ledger Technology (DLT) based on a Directed Acyclic Graph (DAG) to generate, validate, and confirm electricity transactions in the Smart Grid. The convergence of the Smart Grid and the distributed ledger concept has recently been introduced: since Smart Grids require a distributed network architecture for power distribution and trading, Distributed Ledger-based Smart Grid design has become a prominent research domain. However, only Blockchain-based methods, one type of distributed ledger scheme, are currently being considered or adopted in the Smart Grid. Due to computation-intensive consensus schemes such as Proof-of-Work and discrete block generation, Blockchain-based distributed ledger systems suffer from efficiency and latency issues. We propose a DAG-based distributed ledger for Smart Grids, called PowerGraph, to resolve this problem. Since a DAG-based distributed ledger does not need to generate blocks for confirmation, each PowerGraph transaction undergoes the validation and confirmation process individually. In addition, transactions in PowerGraph are used to keep track of energy trades and include various transaction types so that they can fully encompass the events in the Smart Grid network. Finally, to ensure that PowerGraph maintains high performance, we model PowerGraph's performance and propose a novel consensus algorithm that results in the rapid confirmation of transactions. Numerical evaluations show that PowerGraph can accelerate transaction processing by more than 5 times compared to existing DAG-based DLT systems.
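    A minimal sketch of the general DAG-ledger idea, in which each new transaction approves earlier tips and confirmation accrues from later approvers, is shown below. The transaction fields, tip selection, and confirmation threshold are illustrative assumptions and do not reflect PowerGraph's actual transaction types or consensus algorithm.

```python
import itertools
import random

class DagLedger:
    """Tiny DAG-ledger sketch: every new transaction validates and references up
    to two earlier tips, and a transaction counts as confirmed once enough later
    transactions approve it, directly or indirectly. Tangle-style illustration
    only, not PowerGraph's design."""

    def __init__(self, confirm_threshold=3, seed=0):
        self.parents = {"genesis": []}       # tx_id -> approved earlier tx_ids
        self.approvers = {"genesis": set()}  # tx_id -> later tx_ids approving it
        self.payloads = {"genesis": None}
        self.confirm_threshold = confirm_threshold
        self._ids = itertools.count()
        self._rng = random.Random(seed)

    def _tips(self):
        referenced = {p for ps in self.parents.values() for p in ps}
        unreferenced = [tx for tx in self.parents if tx not in referenced]
        return unreferenced or ["genesis"]

    def add_transaction(self, payload):
        tips = self._tips()
        chosen = self._rng.sample(tips, k=min(2, len(tips)))
        tx_id = f"tx{next(self._ids)}"
        self.parents[tx_id] = chosen
        self.approvers[tx_id] = set()
        self.payloads[tx_id] = payload
        # The new transaction implicitly approves all of its ancestors.
        stack, seen = list(chosen), set()
        while stack:
            ancestor = stack.pop()
            if ancestor in seen:
                continue
            seen.add(ancestor)
            self.approvers[ancestor].add(tx_id)
            stack.extend(self.parents[ancestor])
        return tx_id

    def is_confirmed(self, tx_id):
        return len(self.approvers[tx_id]) >= self.confirm_threshold

ledger = DagLedger()
first = ledger.add_transaction({"kWh": 3.2, "seller": "houseA", "buyer": "houseB"})
for _ in range(5):
    ledger.add_transaction({"kWh": 1.0, "seller": "houseC", "buyer": "houseD"})
print(first, "confirmed:", ledger.is_confirmed(first))   # tx0 confirmed: True
```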

    DAGmap: Multi-Drone SLAM via a DAG-Based Distributed Ledger

    No full text
    Simultaneous localization and mapping (SLAM) by unmanned vehicles, such as drones, has great potential in a wide range of applications. When operating SLAM in multi-drone scenarios, collecting and sharing the map data and deriving converged maps are the major issues, and are regarded as the bottleneck of the system. This paper presents a novel approach that utilizes the concepts of distributed ledger technology (DLT) to enable online map convergence among multiple drones without a centralized station. As DLT allows each agent to secure a collective database of valid transactions, DLT-powered SLAM lets each drone secure global 3D map data and utilize these data for navigation. However, block-based DLT, the so-called blockchain, may not fit multi-drone SLAM well due to its restricted data structure, discrete consensus, and high power consumption. We therefore designed a multi-drone SLAM system, named DAGmap, that constructs a DAG-based map database and sifts out noisy 3D points based on the DLT philosophy. Considering the differences between currency transactions and map-data construction, we designed a new strategy for data organization and validation, and a consensus framework, under the philosophy of DAG-based DLT. We carried out a numerical analysis of the proposed system with an off-the-shelf camera and drones.
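    As a stand-in for the sifting step, the sketch below merges per-drone 3D observations on a voxel grid and keeps only points observed by enough distinct drones. In DAGmap the validation runs over a DAG ledger rather than this simple counting; the grid size, threshold, and names are illustrative assumptions.

```python
from collections import defaultdict

def converge_map(observations, cell=0.5, min_observers=2):
    """Merge per-drone 3D point observations into a shared map, keeping only
    points seen by at least `min_observers` distinct drones.

    observations: dict drone_id -> list of (x, y, z) points. Points are snapped
    to a voxel grid of size `cell` so nearby observations count as the same
    point. A simple stand-in for DAGmap's validation/sifting, to make the idea
    concrete.
    """
    observers = defaultdict(set)
    for drone, points in observations.items():
        for x, y, z in points:
            voxel = (round(x / cell), round(y / cell), round(z / cell))
            observers[voxel].add(drone)
    return [tuple(c * cell for c in voxel)
            for voxel, who in observers.items() if len(who) >= min_observers]

obs = {
    "drone1": [(1.02, 2.01, 0.0), (5.0, 5.0, 1.0)],
    "drone2": [(0.98, 1.99, 0.0)],   # re-observes the first point
}
print(converge_map(obs))   # only the point both drones saw survives
```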